

Experimental Design for Missing Physics

Strouwen, Arno, Micluţa-Câmpeanu, Sebastián

arXiv.org Machine Learning

For most process systems, knowledge of the model structure is incomplete. This missing physics must then be learned from experimental data. Recently, the combination of universal differential equations and symbolic regression has become a popular tool for discovering this missing physics. Universal differential equations employ neural networks to represent the missing parts of the model structure, and symbolic regression aims to make these neural networks interpretable. These machine learning techniques require high-quality data to successfully recover the true model structure. To gather such informative data, a sequential experimental design technique is developed, based on optimally discriminating between the plausible model structures suggested by symbolic regression. The technique is then applied to discovering the missing physics of a bioreactor.
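The core idea of a universal differential equation is to embed a neural network inside an otherwise mechanistic ODE, so that the network stands in for the unknown physics. The following is a minimal sketch of that idea, not the paper's implementation: the layer sizes, the known decay term, and the forward-Euler integrator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny untrained MLP standing in for the unknown physics term
# (hypothetical sizes; in practice its weights are fit to data).
W1, b1 = rng.normal(size=(8, 1)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def nn_term(x):
    """Neural-network surrogate for the missing part of the model."""
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    return (W2 @ h + b2)[0]

def rhs(x):
    """Universal ODE right-hand side: known decay plus learned correction."""
    return -0.5 * x + nn_term(x)

def euler(x0, dt=0.01, steps=500):
    """Forward-Euler integration of the universal ODE."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * rhs(xs[-1]))
    return np.array(xs)

traj = euler(1.0)
```

After the network is trained against measurements, symbolic regression would then be run on `nn_term` to replace it with an interpretable closed-form expression.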


Bayesian Inference for Missing Physics

Strouwen, Arno

arXiv.org Machine Learning

Model-based approaches for (bio)process systems often suffer from incomplete knowledge of the underlying physical, chemical, or biological laws. Universal differential equations, which embed neural networks within differential equations, have emerged as powerful tools to learn this missing physics from experimental data. However, neural networks are inherently opaque, motivating their post-processing via symbolic regression to obtain interpretable mathematical expressions. Genetic algorithm-based symbolic regression is a popular approach for this post-processing step, but provides only point estimates and cannot quantify the confidence we should place in a discovered equation. We address this limitation by applying Bayesian symbolic regression, which uses Reversible Jump Markov Chain Monte Carlo to sample from the posterior distribution over symbolic expression trees. This approach naturally quantifies uncertainty in the recovered model structure. We demonstrate the methodology on a Lotka-Volterra predator-prey system and then show how a well-designed experiment leads to lower uncertainty in a fed-batch bioreactor case study.
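To illustrate the Bayesian flavor of structure discovery, the toy sketch below runs a Metropolis sampler over a small fixed set of candidate symbolic structures and reports a posterior over them. This is a deliberate simplification: full Reversible Jump MCMC also proposes moves that change the size of the expression tree, which is omitted here, and the candidate set, coefficients, and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a "true" structure y = 1.5*x (made up for this sketch).
x = np.linspace(0, 2, 30)
y = 1.5 * x + rng.normal(scale=0.05, size=x.size)

# Candidate symbolic structures with fixed coefficients (hypothetical).
candidates = {
    "1.5*x":  lambda x: 1.5 * x,
    "x**2":   lambda x: x ** 2,
    "sin(x)": lambda x: np.sin(x),
}
names = list(candidates)

def log_lik(name):
    """Gaussian log-likelihood of the data under one candidate structure."""
    r = y - candidates[name](x)
    return -0.5 * np.sum(r ** 2) / 0.05 ** 2

# Metropolis over the discrete model index (uniform prior and proposal).
current = rng.choice(names)
counts = {n: 0 for n in names}
for _ in range(2000):
    proposal = rng.choice(names)
    if np.log(rng.uniform()) < log_lik(proposal) - log_lik(current):
        current = proposal
    counts[current] += 1

posterior = {n: c / 2000 for n, c in counts.items()}
```

The posterior mass concentrating on one candidate is exactly the structural uncertainty quantification the abstract refers to; with less informative data, the mass would spread across several structures.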



Appendix A Details

Neural Information Processing Systems

More details on each of these datasets are given below. This data is referred to as "in-domain" because the validation data is generated with the same procedure as the training data. As for cache hits, they are also not counted as visits. Figure 9 shows the MCTS-guided decoding algorithm for symbolic regression, with the pre-trained transformer model used for the expansion and evaluation steps; the plain MCTS algorithm (Figure 1) can be used in a similar fashion but without sharing information with the pre-trained transformer. The approach involves fine-tuning an actor-critic-like model to adjust the pre-trained model on a group of symbolic regression instances.
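The MCTS loop underlying such guided decoding can be sketched as follows. The pre-trained transformer that the appendix uses for expansion and evaluation is replaced here by a toy token set and a placeholder reward, so `TOKENS`, `reward`, and the fixed search depth are illustrative assumptions, not the paper's setup.

```python
import math

# Toy MCTS over short token sequences, as in decoding a symbolic expression.
TOKENS = ["x", "sin", "add"]

def reward(seq):
    """Placeholder evaluator; a real system would score the decoded expression."""
    return 1.0 if "sin" in seq else 0.1

def ucb(parent, child, c=1.4):
    """UCB1 score balancing exploitation and exploration."""
    if child["visits"] == 0:
        return float("inf")
    return child["value"] / child["visits"] + c * math.sqrt(
        math.log(parent["visits"]) / child["visits"])

def mcts(iterations=200, depth=3):
    root = {"visits": 0, "value": 0.0, "children": {}, "seq": []}
    for _ in range(iterations):
        node, path = root, [root]
        # Selection/expansion: walk down to a fixed-depth leaf via UCB1.
        while len(node["seq"]) < depth:
            for t in TOKENS:
                node["children"].setdefault(
                    t, {"visits": 0, "value": 0.0, "children": {},
                        "seq": node["seq"] + [t]})
            t = max(node["children"],
                    key=lambda t: ucb(node, node["children"][t]))
            node = node["children"][t]
            path.append(node)
        r = reward(node["seq"])
        # Backpropagation of the leaf reward along the visited path.
        for n in path:
            n["visits"] += 1
            n["value"] += r
    # Return the most-visited first token.
    return max(root["children"], key=lambda t: root["children"][t]["visits"])

best_first_token = mcts()
```

In the appendix's variant, the transformer's policy would bias which children are expanded and its value head would replace the rollout reward; the plain-MCTS baseline (Figure 1) corresponds to running this loop without that guidance.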